Results 1 - 20 of 144,205
1.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article in English | MEDLINE | ID: mdl-38434231

ABSTRACT

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Subject(s)
Histological Techniques , Microscopy , Animals , Flow Cytometry , Image Processing, Computer-Assisted
2.
Opt Express ; 32(7): 11934-11951, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38571030

ABSTRACT

Optical coherence tomography (OCT) can resolve three-dimensional biological tissue structures, but it is inevitably plagued by speckle noise that degrades image quality and obscures biological structure. Unsupervised deep learning methods have recently become popular for OCT despeckling, but they still require either unpaired noisy-clean images or paired noisy-noisy images. To address this problem, we propose what we believe to be a novel unsupervised deep learning method for OCT despeckling, termed Double-free Net, which eliminates the need for both ground truth data and repeated scanning by sub-sampling noisy images and synthesizing noisier images. In comparison to existing unsupervised methods, Double-free Net obtains superior denoising performance when trained on datasets comprising retinal and human tissue images without clean images. The efficacy of Double-free Net in denoising holds significant promise for diagnostic applications in retinal pathologies and enhances the accuracy of retinal layer segmentation. Results demonstrate that Double-free Net outperforms state-of-the-art methods and adapts conveniently across different OCT images.
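To make the training-pair construction concrete, here is a minimal numpy sketch of one plausible reading of "sub-sampling noisy images and synthesizing noisier images" (a Neighbor2Neighbor-style diagonal sub-sampling plus Noisier2Noise-style additive noise; the abstract does not specify Double-free Net's exact scheme):

```python
import numpy as np

def neighbor_subsample(noisy):
    """Split a noisy image into two half-resolution sub-images by
    picking diagonal neighbors of each 2x2 cell (a common sub-sampling
    scheme; the paper's exact strategy may differ)."""
    return noisy[0::2, 0::2], noisy[1::2, 1::2]

def synthesize_noisier(noisy, extra_sigma=0.05, seed=None):
    """Create a 'noisier' view by adding extra Gaussian noise on top of
    the already-noisy input (a Noisier2Noise-style assumption)."""
    rng = np.random.default_rng(seed)
    return noisy + rng.normal(0.0, extra_sigma, size=noisy.shape)

# Toy usage on a synthetic speckled B-scan.
rng = np.random.default_rng(0)
clean = np.linspace(0, 1, 256)[None, :] * np.ones((256, 1))
noisy = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # multiplicative speckle
sub_a, sub_b = neighbor_subsample(noisy)        # pseudo noisy-noisy pair
noisier = synthesize_noisier(noisy, seed=1)     # pseudo noisier input
print(sub_a.shape, sub_b.shape, noisier.shape)  # (128, 128) (128, 128) (256, 256)
```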


Subject(s)
Algorithms , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Retina/diagnostic imaging , Radionuclide Imaging , Image Processing, Computer-Assisted/methods
3.
Sci Data ; 11(1): 330, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38570515

ABSTRACT

Variations in the color and texture of histopathology images arise from differences in staining conditions and imaging devices between hospitals. These biases decrease the robustness of machine learning models exposed to out-of-domain data. To address this issue, we introduce a comprehensive histopathology image dataset named PathoLogy Images of Scanners and Mobile phones (PLISM). The dataset consists of 46 human tissue types stained under 13 hematoxylin-and-eosin conditions and captured using 13 imaging devices. Precisely aligned image patches from different domains allow for an accurate evaluation of color and texture properties in each domain. Variation in PLISM was assessed and found to be substantial across domains, particularly between whole-slide images and smartphones. Furthermore, we assessed the improvement in domain shift using a convolutional neural network pre-trained on PLISM. PLISM is a valuable resource that facilitates the precise evaluation of domain shifts in digital pathology and makes significant contributions towards the development of robust machine learning models that can effectively address challenges of domain shift in histological image analysis.


Subject(s)
Histological Techniques , Image Processing, Computer-Assisted , Machine Learning , Neural Networks, Computer , Staining and Labeling , Humans , Eosine Yellowish-(YS) , Image Processing, Computer-Assisted/methods , Histology
4.
J Biomed Opt ; 29(4): 046001, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38585417

ABSTRACT

Significance: Endoscopic screening for esophageal cancer (EC) may enable early cancer diagnosis and treatment. While optical microendoscopic technology has shown promise in improving specificity, the limited field of view (<1 mm) significantly reduces the ability to survey large areas efficiently in EC screening. Aim: To improve the efficiency of endoscopic screening, we propose a novel concept of an end-expandable endoscopic optical fiber probe for a larger field of visualization and, for the first time, evaluate a deep-learning-based image super-resolution (DL-SR) method to overcome the issue of limited sampling capability. Approach: To demonstrate the feasibility of the end-expandable optical fiber probe, DL-SR was applied to simulated low-resolution microendoscopic images to generate super-resolved (SR) ones. By varying the degradation model of image data acquisition, we identified the optimal parameters for optical fiber probe prototyping. The proposed screening method was validated with a human pathology reading study. Results: For the various degradation parameters considered, the DL-SR method demonstrated different levels of improvement in traditional measures of image quality. The endoscopists' interpretations of the SR images were comparable to those performed on the high-resolution ones. Conclusions: This work suggests avenues for the development of DL-SR-enabled sparse image reconstruction to improve high-yield EC screening and similar clinical applications.
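A minimal sketch of the kind of degradation model described in the Approach (optical blur, sparse sampling, noise, re-gridding); the specific parameters here are illustrative, not the paper's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def degrade(hr, factor=4, blur_sigma=1.5, noise_sigma=0.01, seed=None):
    """Simulate a low-resolution capture from a high-resolution image:
    optical blur -> sparse sampling -> sensor noise -> interpolation
    back to the original grid."""
    rng = np.random.default_rng(seed)
    lr = gaussian_filter(hr, sigma=blur_sigma)        # optical blur
    lr = lr[::factor, ::factor]                       # sparse fiber sampling
    lr = lr + rng.normal(0, noise_sigma, lr.shape)    # acquisition noise
    return zoom(lr, factor, order=3)                  # bicubic re-gridding

hr = np.random.default_rng(1).random((256, 256))
lr_input = degrade(hr)    # input a DL-SR model would map back toward hr
print(lr_input.shape)     # (256, 256)
```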


Subject(s)
Barrett Esophagus , Deep Learning , Esophageal Neoplasms , Humans , Optical Fibers , Esophageal Neoplasms/diagnostic imaging , Barrett Esophagus/pathology , Image Processing, Computer-Assisted
5.
Sci Rep ; 14(1): 8253, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589478

ABSTRACT

This work presents a deep learning approach for rapid and accurate muscle water T2 mapping with subject-specific fat T2 calibration using multi-spin-echo acquisitions. The method addresses the computational limitations of conventional bi-component Extended Phase Graph fitting methods (nonlinear-least-squares and dictionary-based) by leveraging fully connected neural networks for fast processing with minimal computational resources. We validated the approach through in vivo experiments using MRI systems from two different vendors. The results showed strong agreement between our deep learning approach and the reference methods, summarized by Lin's concordance correlation coefficients ranging from 0.89 to 0.97. Further, the deep learning method achieved a significant computational speed-up, processing data 116 and 33 times faster than the nonlinear least squares and dictionary methods, respectively. In conclusion, the proposed approach demonstrated significant time and resource efficiency improvements over conventional methods while maintaining similar accuracy. This methodology makes the processing of water T2 data faster and easier for the user and will facilitate the use of quantitative water T2 mapping of muscle in clinical and research studies.
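For intuition, a minimal PyTorch sketch of a fully connected regressor from a multi-echo decay curve to water T2; the layer sizes, echo count, and normalization are assumptions, and the paper's networks are trained against bi-component EPG-simulated signals:

```python
import torch
import torch.nn as nn

class WaterT2Net(nn.Module):
    """Minimal fully connected regressor from a multi-spin-echo
    magnitude decay curve to a muscle water T2 estimate (sizes are
    illustrative, not the paper's architecture)."""
    def __init__(self, n_echoes=17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_echoes, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),  # predicted water T2 (e.g., in ms)
        )

    def forward(self, x):
        # Normalize each curve by its first echo so the network sees
        # decay shape rather than absolute scale.
        return self.net(x / x[:, :1].clamp_min(1e-8))

model = WaterT2Net()
signals = torch.rand(8, 17)    # batch of 8 decay curves
print(model(signals).shape)    # torch.Size([8, 1])
```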


Subject(s)
Algorithms , Deep Learning , Water , Calibration , Magnetic Resonance Imaging/methods , Muscles/diagnostic imaging , Phantoms, Imaging , Image Processing, Computer-Assisted/methods , Brain
6.
BMC Med Imaging ; 24(1): 83, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589793

ABSTRACT

The research focuses on the segmentation and classification of leukocytes, a crucial task in medical image analysis for diagnosing various diseases. The leukocyte dataset comprises four classes of images: monocytes, lymphocytes, eosinophils, and neutrophils. Leukocyte segmentation is achieved through image processing techniques, including background subtraction, noise removal, and contouring. To isolate leukocytes, background, erythrocyte, and leukocyte masks are created from the blood cell images. Isolated leukocytes are then subjected to data augmentation, including brightness and contrast adjustment, flipping, and random shearing, to improve the generalizability of the CNN model. A deep Convolutional Neural Network (CNN) model is employed on the augmented dataset for effective feature extraction and classification. The deep CNN model consists of four convolutional blocks comprising eleven convolutional layers, eight batch normalization layers, eight Rectified Linear Unit (ReLU) layers, and four dropout layers to capture increasingly complex patterns. For this research, a publicly available Kaggle dataset consisting of a total of 12,444 images of the four leukocyte types was used to conduct the experiments. Results showcase the robustness of the proposed framework, achieving impressive performance metrics with an accuracy of 97.98% and a precision of 97.97%. These outcomes affirm the efficacy of the devised segmentation and classification approach in accurately identifying and categorizing leukocytes. The combination of an advanced CNN architecture and meticulous pre-processing steps establishes a foundation for future developments in the field of medical image analysis.
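A hedged OpenCV sketch of leukocyte isolation via thresholding, morphology, and contouring; the color-saturation heuristic and area threshold are illustrative stand-ins for the paper's exact mask-creation steps:

```python
import cv2
import numpy as np

def isolate_leukocytes(bgr):
    """Rough leukocyte isolation in a stained blood smear: leukocyte
    nuclei are typically the most saturated purple structures, so we
    threshold the HSV saturation channel (thresholds are illustrative)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1]
    _, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill holes
    # OpenCV 4.x signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep contours large enough to plausibly be whole cells.
    cells = [c for c in contours if cv2.contourArea(c) > 200]
    return mask, cells

img = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)  # stand-in image
mask, cells = isolate_leukocytes(img)
print(mask.shape, len(cells))
```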


Subject(s)
Deep Learning , Humans , Data Curation , Leukocytes , Neural Networks, Computer , Blood Cells , Image Processing, Computer-Assisted/methods
7.
Sci Rep ; 14(1): 8348, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38594373

ABSTRACT

Single molecule fluorescence in situ hybridisation (smFISH) has become a valuable tool to investigate the mRNA expression of single cells. However, it requires a considerable amount of programming expertise to use currently available open-source analytical software packages to extract and analyse quantitative data about transcript expression. Here, we present FISHtoFigure, a new software tool developed specifically for the analysis of mRNA abundance and co-expression in QuPath-quantified, multi-labelled smFISH data. FISHtoFigure facilitates the automated spatial analysis of transcripts of interest, allowing users to analyse populations of cells positive for specific combinations of mRNA targets without the need for computational image analysis expertise. As a proof of concept and to demonstrate the capabilities of this new research tool, we have validated FISHtoFigure in multiple biological systems. We used FISHtoFigure to identify an upregulation in the expression of Cd4 by T-cells in the spleens of mice infected with influenza A virus, before analysing more complex data showing crosstalk between microglia and regulatory B-cells in the brains of mice infected with Trypanosoma brucei brucei. These analyses demonstrate the ease of analysing cell expression profiles using FISHtoFigure and the value of this new tool in the field of smFISH data analysis.
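A small pandas sketch of the kind of co-expression query FISHtoFigure automates over QuPath-quantified data; the column names and spot-count threshold here are hypothetical, not FISHtoFigure's actual schema:

```python
import pandas as pd

# Hypothetical QuPath per-cell export: one row per cell with detected
# spot counts per probe channel.
cells = pd.DataFrame({
    "cell_id": [1, 2, 3, 4],
    "Cd4_spots": [12, 0, 7, 1],
    "Cd3e_spots": [9, 2, 11, 0],
})

def positive_for(df, targets, min_spots=3):
    """Select cells co-expressing all listed mRNA targets, defined here
    as >= min_spots detected spots per target (an assumed criterion)."""
    mask = pd.Series(True, index=df.index)
    for t in targets:
        mask &= df[f"{t}_spots"] >= min_spots
    return df[mask]

double_pos = positive_for(cells, ["Cd4", "Cd3e"])
print(f"{len(double_pos)}/{len(cells)} cells are Cd4+Cd3e+ co-expressing")
```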


Subject(s)
Image Processing, Computer-Assisted , Software , Animals , Mice , RNA, Messenger/metabolism , In Situ Hybridization, Fluorescence/methods , Up-Regulation
9.
Med Image Anal ; 94: 103153, 2024 May.
Article in English | MEDLINE | ID: mdl-38569380

ABSTRACT

Monitoring the healing progress of diabetic foot ulcers is a challenging process. Accurate segmentation of foot ulcers can help podiatrists quantitatively measure the size of wound regions to assist prediction of healing status. The main challenge in this field is the lack of publicly available manual delineations, which can be time-consuming and laborious to produce. Recently, methods based on deep learning have shown excellent results in automatic segmentation of medical images; however, they require large-scale datasets for training, and there is limited consensus on which methods perform best. The 2022 Diabetic Foot Ulcers segmentation challenge was held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention, and sought to address these issues and stimulate progress in this research domain. A training set of 2000 images exhibiting diabetic foot ulcers was released with corresponding segmentation ground truth masks. Of the 72 (approved) requests from 47 countries, 26 teams used this data to develop fully automated systems to predict the true segmentation masks on a test set of 2000 images, with the corresponding ground truth segmentation masks kept private. Predictions from participating teams were scored and ranked according to their average Dice similarity coefficient between the ground truth and predicted masks. The winning team achieved a Dice of 0.7287 for diabetic foot ulcer segmentation. This challenge has now entered a live leaderboard stage where it serves as a challenging benchmark for diabetic foot ulcer segmentation.
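The challenge's ranking metric is straightforward to compute; a minimal numpy implementation of the Dice similarity coefficient between binary masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64), int); a[16:48, 16:48] = 1   # predicted mask
b = np.zeros((64, 64), int); b[20:52, 20:52] = 1   # ground truth mask
print(round(dice(a, b), 4))                         # 0.7656 for this toy pair
```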


Subject(s)
Diabetes Mellitus , Diabetic Foot , Humans , Diabetic Foot/diagnostic imaging , Neural Networks, Computer , Benchmarking , Image Processing, Computer-Assisted/methods
10.
Med Image Anal ; 94: 103158, 2024 May.
Article in English | MEDLINE | ID: mdl-38569379

ABSTRACT

Magnetic resonance (MR) images collected in 2D clinical protocols typically have large inter-slice spacing, resulting in high in-plane resolution but reduced through-plane resolution. Super-resolution techniques can enhance the through-plane resolution of MR images to facilitate downstream visualization and computer-aided diagnosis. However, most existing works train the super-resolution network at a fixed scaling factor, which is ill-suited to clinical settings where inter-slice spacing varies across MR scans. Inspired by recent progress in implicit neural representation, we propose a Spatial Attention-based Implicit Neural Representation (SA-INR) network for arbitrary reduction of MR inter-slice spacing. The SA-INR represents an MR image as a continuous implicit function of 3D coordinates. In this way, the SA-INR can reconstruct the MR image with arbitrary inter-slice spacing by continuously sampling coordinates in 3D space. In particular, a local-aware spatial attention operation is introduced to model nearby voxels and their affinity more accurately within a larger receptive field. Meanwhile, to improve computational efficiency, a gradient-guided gating mask is proposed to apply the local-aware spatial attention to selected areas only. We evaluate our method on the public HCP-1200 dataset and a clinical knee MR dataset to demonstrate its superiority over existing methods.
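A minimal coordinate-MLP sketch of the implicit-representation idea that enables arbitrary inter-slice spacing; the SA-INR's local-aware spatial attention and gating mask are omitted, and the network sizes are assumptions:

```python
import torch
import torch.nn as nn

class SimpleINR(nn.Module):
    """Plain coordinate MLP mapping normalized (x, y, z) to intensity.
    Once fitted to a volume, it can be queried at any z spacing."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):
        return self.net(xyz)

inr = SimpleINR()
# Query a finer through-plane grid than was acquired: any z spacing works.
z = torch.linspace(-1, 1, steps=60)        # e.g., 60 slices instead of 15
y, x = torch.meshgrid(torch.linspace(-1, 1, 4),
                      torch.linspace(-1, 1, 4), indexing="ij")
coords = torch.stack([x.flatten().repeat(60),
                      y.flatten().repeat(60),
                      z.repeat_interleave(16)], dim=-1)
print(inr(coords).shape)   # one intensity per queried 3D coordinate
```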


Subject(s)
Diagnosis, Computer-Assisted , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Knee Joint , Phantoms, Imaging , Image Processing, Computer-Assisted/methods
11.
Med Image Anal ; 94: 103149, 2024 May.
Article in English | MEDLINE | ID: mdl-38574542

ABSTRACT

The variation in histologic staining between different medical centers is one of the most profound challenges in the field of computer-aided diagnosis. The appearance disparity of pathological whole-slide images causes algorithms to become less reliable, which in turn impedes the widespread applicability of downstream tasks like cancer diagnosis. Furthermore, different stainings introduce biases into training that, under domain shift, negatively affect test performance. Therefore, in this paper we propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN. Our modifications to CycleGAN allow us to normalize images of different origins without retraining or using different models. We perform an extensive evaluation of our method using various metrics and compare it to commonly used methods that are multi-domain capable. First, we evaluate how well our method fools a domain classifier that tries to assign a medical center to an image. Then, we test our normalization on the tumor classification performance of a downstream classifier. Furthermore, we evaluate the image quality of the normalized images using the structural similarity index (SSIM) and the ability to reduce the domain shift using the Fréchet inception distance (FID). We show that our method is multi-domain capable, provides among the highest image quality of the compared methods, and can most reliably fool the domain classifier while keeping the tumor classifier performance high. By reducing the domain influence, biases in the data can be removed on the one hand, and the origin of the whole-slide image can be disguised on the other, thus enhancing patient data privacy.
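Of the evaluation metrics mentioned, SSIM is easy to sketch; a minimal example using scikit-image (the stand-in "normalized" image and data range are illustrative, and FID would require a separate feature-based implementation):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
original = rng.random((256, 256, 3))
# Stand-in for a stain-normalized output; in practice this would be the
# CycleGAN-normalized image of the same tissue.
normalized = np.clip(original + rng.normal(0, 0.02, original.shape), 0, 1)

# SSIM quantifies how much image structure the normalization preserves.
# channel_axis requires scikit-image >= 0.19.
score = ssim(original, normalized, channel_axis=-1, data_range=1.0)
print(f"SSIM after normalization: {score:.3f}")
```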


Subject(s)
Coloring Agents , Neoplasms , Humans , Coloring Agents/chemistry , Staining and Labeling , Algorithms , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted/methods
12.
Med Image Anal ; 94: 103150, 2024 May.
Article in English | MEDLINE | ID: mdl-38574545

ABSTRACT

Self-supervised representation learning can boost the performance of a pre-trained network on downstream tasks for which labeled data are limited. A popular method based on this paradigm, known as contrastive learning, works by constructing sets of positive and negative pairs from the data, and then pulling closer the representations of positive pairs while pushing apart those of negative pairs. Although contrastive learning has been shown to improve performance in various classification tasks, its application to image segmentation has been more limited. This stems in part from the difficulty of defining positive and negative pairs for dense feature maps without access to pixel-wise annotations. In this work, we propose a novel self-supervised pre-training method that overcomes the challenges of contrastive learning in image segmentation. Our method leverages Invariant Information Clustering (IIC) as an unsupervised task to learn a local representation of images in the decoder of a segmentation network, but addresses three important drawbacks of this approach: (i) the difficulty of optimizing the loss based on mutual information maximization; (ii) the lack of clustering consistency for different random transformations of the same image; (iii) the poor correspondence of clusters obtained by IIC with region boundaries in the image. Toward this goal, we first introduce a regularized mutual information maximization objective that encourages the learned clusters to be balanced and consistent across different image transformations. We also propose a boundary-aware loss based on cross-correlation, which helps the learned clusters to be more representative of important regions in the image. Compared to contrastive learning applied to dense features, our method does not require computing positive and negative pairs and also enhances interpretability through the visualization of learned clusters. Comprehensive experiments involving four different medical image segmentation tasks reveal the high effectiveness of our self-supervised representation learning method. Our results show the proposed method to outperform several state-of-the-art self-supervised and semi-supervised segmentation approaches by a large margin, reaching a performance close to full supervision with only a few labeled examples.
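For reference, a minimal PyTorch implementation of the IIC mutual-information objective between cluster predictions for two views, with an entropy-weighting coefficient as one common way to encourage balanced clusters; the paper's full regularized objective and boundary-aware term are not reproduced here:

```python
import torch
import torch.nn.functional as F

def iic_loss(p1, p2, lamb=1.0, eps=1e-8):
    """Invariant Information Clustering objective for two softmax
    cluster predictions of transformed views of the same input.
    lamb > 1 weights the marginal-entropy terms more heavily,
    encouraging balanced clusters."""
    joint = (p1.unsqueeze(2) * p2.unsqueeze(1)).mean(dim=0)  # (k, k)
    joint = (joint + joint.t()) / 2                          # symmetrize
    joint = joint.clamp_min(eps)
    marg1 = joint.sum(dim=1, keepdim=True)                   # row marginals
    marg2 = joint.sum(dim=0, keepdim=True)                   # column marginals
    # Negative mutual information I(z1; z2).
    return -(joint * (torch.log(joint)
                      - lamb * torch.log(marg1)
                      - lamb * torch.log(marg2))).sum()

p1 = F.softmax(torch.randn(32, 10), dim=1)  # view 1 cluster probabilities
p2 = F.softmax(torch.randn(32, 10), dim=1)  # view 2 (transformed) probabilities
print(iic_loss(p1, p2).item())
```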


Subject(s)
Image Processing, Computer-Assisted , Learning , Humans , Supervised Machine Learning
13.
Med Image Anal ; 94: 103157, 2024 May.
Article in English | MEDLINE | ID: mdl-38574544

ABSTRACT

Computer-aided detection and diagnosis systems (CADe/CADx) in endoscopy are commonly trained using high-quality imagery, which is not representative of the heterogeneous input typically encountered in clinical practice. In endoscopy, image quality heavily relies on both the skills and experience of the endoscopist and the specifications of the system used for screening. Factors such as poor illumination, motion blur, and specific post-processing settings can significantly alter the quality and general appearance of these images. This so-called domain gap between the data used for developing the system and the data it encounters after deployment, and the impact it has on the performance of deep neural networks (DNNs) supporting endoscopic CAD systems, remains largely unexplored. As many such systems, e.g., for polyp detection, are already being rolled out in clinical practice, this poses severe patient risks, particularly in community hospitals, where both the imaging equipment and experience are subject to considerable variation. Therefore, this study aims to evaluate the impact of this domain gap on the clinical performance of CADe/CADx for various endoscopic applications. For this, we leverage two publicly available data sets (KVASIR-SEG and GIANA) and two in-house data sets. We investigate the performance of commonly used DNN architectures under synthetic, clinically calibrated image degradations and on a prospectively collected dataset including 342 endoscopic images of lower subjective quality. Additionally, we assess the influence of DNN architecture and complexity, data augmentation, and pretraining techniques on robustness. The results reveal a considerable decline in performance of 11.6% (±1.5), compared to the reference, within the clinically calibrated boundaries of image degradations. Nevertheless, employing more advanced DNN architectures and self-supervised in-domain pre-training effectively mitigates this drop to 7.7% (±2.03). Additionally, these enhancements yield the highest performance on the manually collected test set including images of lower subjective quality. By comprehensively assessing the robustness of popular DNN architectures and training strategies across multiple datasets, this study provides valuable insights into their performance and limitations for endoscopic applications. The findings highlight the importance of including robustness evaluation when developing DNNs for endoscopy applications and propose strategies to mitigate performance loss.


Subject(s)
Diagnosis, Computer-Assisted , Neural Networks, Computer , Humans , Diagnosis, Computer-Assisted/methods , Endoscopy, Gastrointestinal , Image Processing, Computer-Assisted/methods
14.
Sci Data ; 11(1): 366, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38605079

ABSTRACT

Studies of radiomics features (RFs) have shown limited reproducibility of RFs across different acquisition settings. To date, reproducibility studies using CT images have mainly relied on phantoms, owing to the harm of exposing patients to repeated X-ray doses. The CadAIver dataset provided here aims to evaluate how CT scanner parameters affect radiomics features in a cadaveric donor. The dataset comprises 112 unique CT acquisitions of a cadaveric trunk acquired on 3 different CT scanners, varying kV, mA, field-of-view, and reconstruction kernel settings. Technical validation of the CadAIver dataset comprises a comprehensive univariate and multivariate GLM approach to assess the stability of each RF extracted from the lumbar vertebrae. The complete dataset is publicly available for future research in the RF field, and could foster the creation of a collaborative open CT image database to increase the sample size, the range of available scanners, and the available body districts.
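A toy pandas sketch of a per-feature stability screen across acquisition settings using the coefficient of variation; the data and column names are fabricated stand-ins, and the paper's validation uses a fuller univariate and multivariate GLM analysis:

```python
import numpy as np
import pandas as pd

# Hypothetical long-format extraction: one row per (acquisition, feature).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "feature": np.repeat(["glcm_contrast", "firstorder_mean"], 12),
    "kernel": np.tile(["soft", "sharp"], 12),
    # glcm_contrast barely moves across settings; firstorder_mean swings.
    "value": rng.normal([10, 10.5] * 6 + [50, 80] * 6, 1.0),
})

# Coefficient of variation of each feature across all acquisition
# settings: a low CV suggests a more reproducible feature.
cv = df.groupby("feature")["value"].apply(lambda v: v.std() / abs(v.mean()))
print(cv.sort_values())
```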


Subject(s)
Lumbar Vertebrae , Tomography, X-Ray Computed , Humans , Cadaver , Image Processing, Computer-Assisted/methods , Lumbar Vertebrae/diagnostic imaging , Reproducibility of Results , Tomography, X-Ray Computed/methods
15.
Comput Biol Med ; 173: 108370, 2024 May.
Article in English | MEDLINE | ID: mdl-38564854

ABSTRACT

The transformer architecture has achieved remarkable success in medical image analysis owing to its powerful capability for capturing long-range dependencies. However, due to the lack of an intrinsic inductive bias for modeling visual structural information, the transformer generally requires a large-scale pre-training schedule, limiting clinical applications where only expensive, small-scale medical data are available. To this end, we propose a slimmable transformer that explores intrinsic inductive bias via position information for medical image segmentation. Specifically, we empirically investigate how different position encoding strategies affect the prediction quality of the region of interest (ROI) and observe that ROIs are sensitive to the choice of strategy. Motivated by this, we present a novel Hybrid Axial-Attention (HAA) that incorporates pixel-level spatial structure and relative position information as inductive biases. Moreover, we introduce a gating mechanism to achieve efficient feature selection and further improve representation quality on small-scale datasets. Experiments on the LGG and COVID-19 datasets prove the superiority of our method over the baseline and previous works. Internal workflow visualization with interpretability analysis further validates the approach; the proposed slimmable transformer has the potential to be developed into a visual software tool for improving computer-aided lesion diagnosis and treatment planning.
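A minimal PyTorch sketch of axial attention, the factorization behind the proposed HAA; the relative-position bias and gating mechanism from the paper are omitted, and the dimensions are illustrative:

```python
import torch
import torch.nn as nn

class AxialAttention2D(nn.Module):
    """Attention factorized along image axes: attend within each column
    (height axis), then within each row (width axis), which is far
    cheaper than full 2D self-attention."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn_h = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_w = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        # Height axis: sequences of length H, one per (batch, column).
        t = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        t, _ = self.attn_h(t, t, t)
        x = t.reshape(b, w, h, c).permute(0, 3, 2, 1)
        # Width axis: sequences of length W, one per (batch, row).
        t = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        t, _ = self.attn_w(t, t, t)
        return t.reshape(b, h, w, c).permute(0, 3, 1, 2)

block = AxialAttention2D(dim=32)
print(block(torch.randn(2, 32, 16, 16)).shape)   # torch.Size([2, 32, 16, 16])
```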


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Diagnosis, Computer-Assisted , Software , Workflow , Image Processing, Computer-Assisted
16.
Comput Biol Med ; 173: 108377, 2024 May.
Article in English | MEDLINE | ID: mdl-38569233

ABSTRACT

Observing cortical vascular structures and functions using laser speckle contrast imaging (LSCI) at high resolution plays a crucial role in understanding cerebral pathologies. Usually, open-skull window techniques are applied to reduce scattering from the skull and enhance image quality. However, craniotomy surgeries inevitably induce inflammation, which may obstruct observations in certain scenarios. In contrast, image enhancement algorithms provide popular tools for improving the signal-to-noise ratio (SNR) of LSCI. Current methods have performed unsatisfactorily through the intact skull because transcranial cortical images are of poor quality. Moreover, existing algorithms do not guarantee the accuracy of dynamic blood flow mappings. In this study, we develop an unsupervised deep learning method, named Dual-Channel in Spatial-Frequency Domain CycleGAN (SF-CycleGAN), to enhance the perceptual quality of cortical blood flow imaging by LSCI. SF-CycleGAN enables convenient, non-invasive, and effective observation of cortical vascular structure and accurate dynamic blood flow mappings without craniotomy surgery, visualizing biodynamics in an undisturbed biological environment. Our experimental results show that SF-CycleGAN achieves an SNR at least 4.13 dB higher than that of other unsupervised methods, images the complete vascular morphology, and enables the functional observation of small cortical vessels. Additionally, the proposed method shows remarkable robustness and can be generalized to various imaging configurations and image modalities, including fluorescence images, without retraining.
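Since the headline result is an SNR gain in dB, here is the generic computation, assuming mean-signal-over-background-noise regions; the paper's exact region definitions may differ:

```python
import numpy as np

def snr_db(image, signal_mask, background_mask):
    """SNR in decibels: mean intensity of a signal (vessel) region over
    the standard deviation of a background region."""
    signal = image[signal_mask].mean()
    noise = image[background_mask].std()
    return 20.0 * np.log10(signal / noise)

rng = np.random.default_rng(0)
img = rng.normal(1.0, 0.1, (128, 128))
img[60:68, :] += 2.0                          # bright "vessel" stripe
sig = np.zeros_like(img, bool); sig[60:68, :] = True
bg = np.zeros_like(img, bool); bg[:40, :] = True
print(f"{snr_db(img, sig, bg):.1f} dB")
```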


Subject(s)
Hemodynamics , Image Enhancement , Image Enhancement/methods , Skull/diagnostic imaging , Regional Blood Flow/physiology , Head , Image Processing, Computer-Assisted/methods
17.
Comput Biol Med ; 173: 108390, 2024 May.
Article in English | MEDLINE | ID: mdl-38569234

ABSTRACT

Radiotherapy is one of the primary treatment methods for tumors, but organ movement caused by respiration limits its accuracy. Recently, 3D imaging from a single X-ray projection has received extensive attention as a promising approach to address this issue. However, current methods can only reconstruct 3D images without directly locating the tumor, and have been validated only for fixed-angle imaging, which fails to fully meet the requirements of motion control in radiotherapy. In this study, a novel imaging method, RT-SRTS, is proposed, which integrates 3D imaging and tumor segmentation into one network based on multi-task learning (MTL) and achieves real-time simultaneous 3D reconstruction and tumor segmentation from a single X-ray projection at any angle. Furthermore, attention-enhanced calibrator (AEC) and uncertain-region elaboration (URE) modules are proposed to aid feature extraction and improve segmentation accuracy. The proposed method was evaluated on fifteen patient cases and compared with three state-of-the-art methods. It not only delivers superior 3D reconstruction but also demonstrates commendable tumor segmentation results. Simultaneous reconstruction and segmentation can be completed in approximately 70 ms, significantly faster than the time threshold required for real-time tumor tracking. The efficacies of both AEC and URE have also been validated in ablation studies. The code for this work is available at https://github.com/ZywooSimple/RT-SRTS.
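A toy 2D sketch of the multi-task layout the abstract describes (shared encoder, reconstruction head, segmentation head, weighted joint loss); the real RT-SRTS maps a single projection to 3D volumes and adds the AEC and URE modules, none of which are reproduced here:

```python
import torch
import torch.nn as nn

class TinyMTL(nn.Module):
    """Toy multi-task layout: one shared encoder feeding a
    reconstruction head and a segmentation head."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.recon_head = nn.Conv2d(16, 1, 1)   # image intensities
        self.seg_head = nn.Conv2d(16, 1, 1)     # tumor logits

    def forward(self, x):
        f = self.encoder(x)
        return self.recon_head(f), self.seg_head(f)

model = TinyMTL()
x = torch.randn(2, 1, 64, 64)                   # stand-in input projection
target_img = torch.randn(2, 1, 64, 64)
target_seg = torch.randint(0, 2, (2, 1, 64, 64)).float()
recon, seg = model(x)
# Joint MTL loss: weighted sum of the two task losses (weight assumed).
loss = nn.functional.mse_loss(recon, target_img) \
     + 0.5 * nn.functional.binary_cross_entropy_with_logits(seg, target_seg)
loss.backward()
print(float(loss))
```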


Subject(s)
Imaging, Three-Dimensional , Neoplasms , Humans , Imaging, Three-Dimensional/methods , X-Rays , Radiography , Neoplasms/diagnostic imaging , Respiration , Image Processing, Computer-Assisted/methods
18.
Comput Biol Med ; 173: 108388, 2024 May.
Article in English | MEDLINE | ID: mdl-38569235

ABSTRACT

The COVID-19 pandemic has resulted in hundreds of millions of cases and numerous deaths worldwide. Here, we develop CECT, a novel classification network built from a controllable ensemble of convolutional neural networks and a transformer, to provide timely and accurate COVID-19 diagnosis. The CECT is composed of a parallel convolutional encoder block, an aggregate transposed-convolutional decoder block, and a windowed attention classification block. Each block captures features at different scales, from 28 × 28 to 224 × 224, from the input, composing enriched and comprehensive information. Different from existing methods, our CECT can capture features at both multi-local and global scales without any sophisticated module design. Moreover, the contribution of local features at different scales can be controlled with the proposed ensemble coefficients. We evaluate CECT on two public COVID-19 datasets, where it reaches the highest accuracy of 98.1% in the intra-dataset evaluation, outperforming existing state-of-the-art methods. Moreover, the developed CECT achieves an accuracy of 90.9% on an unseen dataset in the inter-dataset evaluation, showing extraordinary generalization ability. With its remarkable feature capture and generalization ability, we believe CECT can be extended to other medical scenarios as a powerful diagnostic tool. Code is available at https://github.com/NUS-Tim/CECT.
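A minimal sketch of combining per-scale branch outputs with controllable ensemble coefficients, as the abstract describes; the coefficient values and branch count are illustrative, not the paper's:

```python
import torch

def ensemble_logits(branch_logits, coefficients):
    """Combine per-scale branch logits with controllable ensemble
    coefficients so the contribution of each scale can be tuned."""
    coeffs = torch.tensor(coefficients)
    coeffs = coeffs / coeffs.sum()                 # normalize weights
    stacked = torch.stack(branch_logits)           # (n_branches, B, classes)
    return (coeffs.view(-1, 1, 1) * stacked).sum(0)

# Three hypothetical branches operating at different input scales.
logits = [torch.randn(4, 2) for _ in range(3)]     # batch of 4, 2 classes
fused = ensemble_logits(logits, coefficients=[1.0, 2.0, 1.0])
print(fused.shape)                                 # torch.Size([4, 2])
```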


Subject(s)
COVID-19 , Humans , COVID-19 Testing , Pandemics , Neural Networks, Computer , Image Processing, Computer-Assisted
19.
Comput Biol Med ; 173: 108381, 2024 May.
Article in English | MEDLINE | ID: mdl-38569237

ABSTRACT

Multimodal medical image fusion (MMIF) technology plays a crucial role in medical diagnosis and treatment by integrating different images to obtain fused images with comprehensive information. Deep learning-based fusion methods have demonstrated superior performance, but some still encounter challenges such as imbalanced retention of color and texture information and low fusion efficiency. To alleviate these issues, this paper presents a real-time MMIF method, called the lightweight residual fusion network (LRFNet). First, a feature extraction framework with three branches is designed. Two independent branches are used to fully extract brightness and texture information. The fusion branch enables different modal information to be interactively fused at a shallow level, thereby better retaining brightness and texture information. Furthermore, a lightweight residual unit is designed to replace the conventional residual convolution in the model, improving fusion efficiency and reducing the overall model size by a factor of approximately five. Finally, considering that the high-frequency image produced by wavelet decomposition contains abundant edge and texture information, an adaptive strategy is proposed for assigning weights to the loss function based on the information content of the high-frequency image. This strategy effectively guides the model toward preserving intricate details. Experimental results on MRI and functional images demonstrate that the proposed method exhibits superior fusion performance and efficiency compared to alternative approaches. The code of LRFNet is available at https://github.com/HeDan-11/LRFNet.
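A short pywt sketch of deriving a loss weight from high-frequency wavelet content, echoing the adaptive weighting idea; the exact formula is an assumption, not the paper's:

```python
import numpy as np
import pywt

def highfreq_weight(image, wavelet="haar"):
    """Derive a scalar loss weight from the energy of the
    high-frequency wavelet sub-bands: more texture and edges yield a
    larger weight (the paper's exact formula may differ)."""
    _, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    high_energy = sum(np.abs(b).mean() for b in (cH, cV, cD))
    total_energy = high_energy + np.abs(image).mean() + 1e-8
    return high_energy / total_energy

smooth = np.ones((64, 64))
textured = np.random.default_rng(0).random((64, 64))
print(highfreq_weight(smooth), highfreq_weight(textured))  # ~0 vs. larger
```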


Subject(s)
Image Processing, Computer-Assisted , Wavelet Analysis
20.
Comput Biol Med ; 173: 108293, 2024 May.
Article in English | MEDLINE | ID: mdl-38574528

ABSTRACT

Accurately identifying the Kirsten rat sarcoma virus (KRAS) gene mutation status in colorectal cancer (CRC) patients can assist doctors in deciding whether to use specific targeted drugs for treatment. Although deep learning methods are popular, they are often affected by redundant features from non-lesion areas. Moreover, existing methods commonly extract spatial features from imaging data while neglecting important frequency-domain features, which may degrade the performance of KRAS gene mutation status identification. To address this deficiency, we propose a segmentation-guided Transformer U-Net (SG-Transunet) model for KRAS gene mutation status identification in CRC. Integrating the strengths of convolutional neural networks (CNNs) and Transformers, SG-Transunet offers a unique approach to both lesion segmentation and KRAS mutation status identification. Specifically, for precise lesion localization, we employ an encoder-decoder to obtain segmentation results and guide the KRAS gene mutation status identification task. Subsequently, a frequency-domain supplement block is designed to capture frequency-domain features, integrating them with high-level spatial features extracted in the encoding path to derive advanced spatial-frequency-domain features. Furthermore, we introduce a pre-trained Xception block to mitigate the risk of overfitting associated with small-scale datasets. Following this, an aggregate attention module is devised to consolidate spatial-frequency-domain features with global information extracted by the Transformer at shallow and deep levels, thereby enhancing feature discriminability. Finally, we propose a mutual-constrained loss function that simultaneously constrains segmentation mask acquisition and gene status identification. Experimental results demonstrate the superior performance of SG-Transunet over state-of-the-art methods in discriminating KRAS gene mutation status.
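A hedged PyTorch sketch of what a frequency-domain supplement block might look like (FFT log-magnitude features fused back into spatial features); the paper's actual block design is not specified in the abstract, so every choice below is an assumption:

```python
import torch
import torch.nn as nn

class FrequencySupplement(nn.Module):
    """Sketch of a frequency-domain feature block: take the 2D FFT of a
    feature map, process its log-magnitude with a convolution, and add
    the result back to the spatial features."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):                        # x: (B, C, H, W)
        spectrum = torch.fft.fft2(x, norm="ortho")
        log_mag = torch.log1p(spectrum.abs())    # stable magnitude features
        return x + self.conv(log_mag)            # spatial-frequency fusion

block = FrequencySupplement(channels=8)
print(block(torch.randn(2, 8, 32, 32)).shape)    # torch.Size([2, 8, 32, 32])
```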


Subject(s)
Colorectal Neoplasms , Proto-Oncogene Proteins p21(ras) , Humans , Proto-Oncogene Proteins p21(ras)/genetics , Drug Delivery Systems , Mutation/genetics , Neural Networks, Computer , Colorectal Neoplasms/diagnostic imaging , Colorectal Neoplasms/genetics , Image Processing, Computer-Assisted